Abstract: Soybean (Glycine max (L.) Merr.) production is susceptible to biotic and abiotic stresses, exacerbated by extreme weather events. Water-limiting stress, i.e., drought, emerges as a significant risk for soybean production, underscoring the need for advancements in stress monitoring for crop breeding and production. This project combined multi-modal information to identify the most effective and efficient automated methods to study drought response. We investigated a set of diverse soybean accessions using multiple sensors in a time-series, high-throughput phenotyping manner to (1) develop a pipeline for rapid classification of soybean drought stress symptoms and (2) investigate methods for early detection of drought stress. We used unmanned aerial vehicles and sensors in conjunction with machine learning analytics, which offered a swift and efficient means of phenotyping. The visible bands were most effective in classifying the severity of canopy wilting stress after symptom emergence. Non-visible bands in the near-infrared and short-wave infrared regions contributed to differentiating susceptible and tolerant soybean accessions prior to visual symptom development. We report pre-visual detection of soybean wilting using a combination of vegetation indices and spectral bands, especially in the red edge. These results can contribute to early stress detection methodologies and rapid classification of drought responses for breeding and production applications.
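As a rough illustration of the classification step described above, the sketch below derives two common vegetation indices from per-plot band reflectances and trains a classifier on wilting severity ratings. The band set, index choices, model, and all data here are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_plots = 200
# Placeholder per-plot mean reflectances for five bands:
# [blue, green, red, red-edge, near-infrared].
bands = rng.uniform(0.02, 0.6, size=(n_plots, 5))
red, red_edge, nir = bands[:, 2], bands[:, 3], bands[:, 4]

# Vegetation indices often used for canopy stress detection.
ndvi = (nir - red) / (nir + red)            # normalized difference VI
ndre = (nir - red_edge) / (nir + red_edge)  # red-edge variant
X = np.column_stack([bands, ndvi, ndre])

# Placeholder wilting severity ratings (0 = none ... 4 = severe);
# in practice these would be expert visual scores per plot.
y = rng.integers(0, 5, size=n_plots)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```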
-
Abstract: Insect pests significantly impact global agricultural productivity and crop quality. Effective integrated pest management strategies require the identification of insects, including both beneficial and harmful species. Automated identification of insects under real-world conditions presents several challenges, including the need to handle intraspecies dissimilarity and interspecies similarity, life-cycle stages, camouflage, diverse imaging conditions, and variability in insect orientation. An end-to-end approach for training deep-learning models, InsectNet, is proposed to address these challenges. Our approach (i) uses a large dataset of insect images collected through citizen science, along with label-free self-supervised learning, to train a global model; (ii) fine-tunes this global model on smaller, expert-verified regional datasets to create a local insect identification model; (iii) provides high prediction accuracy even for species with small sample sizes; (iv) is designed to enhance model trustworthiness; and (v) democratizes access through streamlined machine learning operations. This global-to-local model strategy offers a more scalable and economically viable solution for implementing advanced insect identification systems across diverse agricultural ecosystems. We report accurate identification (>96% accuracy) of numerous agriculturally and ecologically relevant insect species, including pollinators, parasitoids, predators, and harmful insects. InsectNet provides fine-grained insect species identification, works effectively in challenging backgrounds, and avoids making predictions when uncertain, increasing its utility and trustworthiness. The model and associated workflows are available through a web-based portal accessible from a computer or mobile device. We envision InsectNet complementing existing approaches as part of a growing suite of AI technologies for addressing agricultural challenges.
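The global-to-local strategy and the abstain-when-uncertain behavior described above can be summarized in a short sketch. Everything concrete here (the ResNet-50 stand-in backbone, the class count, and the confidence threshold) is an assumption for illustration, not a detail of InsectNet itself.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

N_REGIONAL_SPECIES = 40        # size of the expert-verified local label set
CONFIDENCE_THRESHOLD = 0.80    # below this, the model declines to answer

# Stand-in for a backbone pretrained with label-free self-supervision.
backbone = resnet50(weights=None)
backbone.fc = nn.Identity()          # expose 2048-d features
for p in backbone.parameters():      # freeze the global representation
    p.requires_grad = False

# Small head fine-tuned on the regional, expert-verified dataset.
head = nn.Linear(2048, N_REGIONAL_SPECIES)

def identify(image_batch: torch.Tensor) -> torch.Tensor:
    """Return predicted species index per image, or -1 when abstaining."""
    with torch.no_grad():
        probs = torch.softmax(head(backbone(image_batch)), dim=1)
    conf, pred = probs.max(dim=1)
    return torch.where(conf >= CONFIDENCE_THRESHOLD,
                       pred, torch.full_like(pred, -1))

print(identify(torch.randn(2, 3, 224, 224)))
```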
-
Abstract: Estimates of plant traits derived from hyperspectral reflectance data have the potential to efficiently substitute for traits that are time- or labor-intensive to score manually. Typical workflows for estimating plant traits from hyperspectral reflectance data employ supervised classification models that can require substantial ground-truth datasets for training. We explore the potential of an unsupervised approach, autoencoders, to extract meaningful traits from plant hyperspectral reflectance data, using measurements of the reflectance of 2151 individual wavelengths of light from the leaves of maize (Zea mays) plants harvested from 1658 field plots in a replicated field trial. A subset of autoencoder-derived variables exhibited significant repeatability, indicating that a substantial proportion of the total variance in these variables was explained by differences between maize genotypes, while other autoencoder variables appear to capture variation resulting from changes in leaf reflectance between different batches of data collection. Several of the repeatable latent variables were significantly correlated with other traits scored from the same maize field experiment, including one autoencoder-derived latent variable (LV8) that predicted plant chlorophyll content modestly better than a supervised model trained on the same data. In at least one case, genome-wide association study hits for variation in autoencoder-derived variables were proximal to genes with known or plausible links to leaf phenotypes expected to alter hyperspectral reflectance. In aggregate, these results suggest that an unsupervised, autoencoder-based approach can identify meaningful and genetically controlled variation in high-dimensional, high-throughput phenotyping data and link the identified variables back to known plant traits of interest.
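A minimal sketch of the autoencoder idea follows, assuming a simple fully connected architecture, an arbitrary latent dimension, and random stand-in spectra; the paper's actual network and training details are not reproduced here.

```python
import torch
import torch.nn as nn

N_BANDS, N_LATENT = 2151, 16   # 2151 wavelengths; latent size is assumed

class SpectraAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(N_BANDS, 256), nn.ReLU(),
            nn.Linear(256, N_LATENT),          # candidate latent variables
        )
        self.decoder = nn.Sequential(
            nn.Linear(N_LATENT, 256), nn.ReLU(),
            nn.Linear(256, N_BANDS),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = SpectraAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
spectra = torch.rand(64, N_BANDS)   # placeholder leaf reflectance spectra

for _ in range(5):                  # abbreviated training loop
    recon, _ = model(spectra)
    loss = nn.functional.mse_loss(recon, spectra)
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, encoder outputs serve as candidate trait variables
# to test for repeatability and correlation with scored traits.
_, latents = model(spectra)
print(latents.shape)                # (64, 16)
```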
-
Automated canopy stress classification for field crops has traditionally relied on single-perspective, two-dimensional (2D) photographs, usually obtained through top-view imaging using unmanned aerial vehicles (UAVs). However, this approach may fail to capture the full extent of plant stress symptoms, which can manifest throughout the canopy. Recent advancements in LiDAR technologies have enabled the acquisition of high-resolution 3D point cloud data for the entire canopy, offering new possibilities for more accurate plant stress identification and rating. This study explores the potential of leveraging 3D point cloud data for improved plant stress assessment. We utilized a dataset of RGB 3D point clouds of 700 soybean plants from a diversity panel exposed to iron deficiency chlorosis (IDC) stress. From this unique set of 700 canopies exhibiting varying levels of IDC, we extracted several representations, including (a) handcrafted IDC symptom-specific features, (b) canopy fingerprints, and (c) features derived from learned latent representations. We then exhaustively investigated combinations of these stress representations and classification models to predict plant stress severity from the 3D data, and compared their performance against similar models trained only on the associated top-view 2D RGB image of each plant. Among the feature-model combinations tested, the 3D canopy fingerprint features trained with a support vector machine yielded the best performance, achieving higher classification accuracy than the best-performing 2D model, a convolutional neural network. Our findings demonstrate the utility of color canopy fingerprinting and underscore the importance of considering 3D data when assessing plant stress in agricultural applications.
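For concreteness, below is a minimal sketch of the winning combination, an SVM over fixed-length per-canopy feature vectors. The feature dimension, severity labels, and data are placeholders, not the study's dataset or fingerprint definition.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(700, 64))    # 700 canopies x 64 fingerprint features
y = rng.integers(1, 6, size=700)  # placeholder IDC severity ratings 1-5

# Standardize features, then fit an RBF-kernel support vector machine.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```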
-
Introduction: Computer vision and deep learning (DL) techniques have succeeded in a wide range of diverse fields. Recently, these techniques have been successfully deployed in plant science applications to address food security, productivity, and environmental sustainability for a growing global population. However, training these DL models often necessitates large-scale manual annotation of data, which frequently becomes a tedious, time- and resource-intensive process. Recent advances in self-supervised learning (SSL) methods have proven instrumental in overcoming these obstacles, using purely unlabeled datasets to pre-train DL models.
Methods: Here, we implement the popular self-supervised contrastive learning methods NNCLR (Nearest neighbor Contrastive Learning of visual Representations) and SimCLR (Simple framework for Contrastive Learning of visual Representations) for the classification of spatial orientation and the segmentation of embryos of maize kernels. Maize kernels are imaged using a commercial high-throughput imaging system. This image data is used in multiple downstream applications across both production and breeding, for instance, sorting for oil content based on segmenting and quantifying the scutellum's size, and classifying haploid and diploid kernels.
Results and discussion: We show that in both the classification and segmentation problems, SSL techniques outperform their purely supervised, transfer learning-based counterparts and are significantly more annotation-efficient. Additionally, we show that a single SSL pre-trained model can be efficiently fine-tuned for both classification and segmentation, indicating good transferability across multiple downstream applications. Segmentation models with SSL-pretrained backbones produce Dice similarity coefficients of 0.81, higher than the 0.78 and 0.73 of those with ImageNet-pretrained and randomly initialized backbones, respectively. We observe that fine-tuning classification and segmentation models on as little as 1% of the annotations produces competitive results. These results show that SSL provides a meaningful step forward in data efficiency for agricultural deep learning and computer vision.
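The contrastive objective at the heart of SimCLR (and, with a support set of nearest neighbors, NNCLR) is the NT-Xent loss, sketched below in a minimal, self-contained form. This is the standard formulation rather than the authors' exact implementation; batch size, embedding dimension, and temperature are illustrative.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two augmented views of the same batch."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.T / temperature                         # cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-pairs
    # For row i, the positive is the other augmented view of the same image:
    # rows 0..N-1 pair with N..2N-1, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage: embeddings for two views of a batch of 32 kernel images.
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
print(nt_xent_loss(z1, z2))
```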
-
Advances in imaging hardware allow high-throughput capture of the detailed three-dimensional (3D) structure of plant canopies. The resulting point cloud data is typically post-processed to extract coarse-scale geometric features (such as volume, surface area, and height) for downstream analysis. We extend feature extraction from 3D point cloud data to a range of additional features, which we denote 'canopy fingerprints'. This is motivated by the successful application of the fingerprint concept to molecular fingerprints in chemistry and acoustic fingerprints in sound engineering. We developed an end-to-end pipeline to generate canopy fingerprints from 3D point clouds of soybean (Glycine max (L.) Merr.) canopies grown in hill plots and captured by a terrestrial laser scanner (TLS). The pipeline includes noise removal, registration, and plot extraction, followed by canopy fingerprint generation. The fingerprints are generated by splitting the data into multiple sub-canopy-scale components and extracting geometric features from each. The generated canopy fingerprints are interpretable and can assist in identifying patterns in a database of canopies, querying similar canopies, or identifying canopies with a particular shape. The framework can be extended to other modalities (for instance, hyperspectral point clouds) and tuned to find the most informative fingerprint representation for downstream tasks. Canopy fingerprints can thus aid in the utilization of canopy traits at previously unutilized scales, with applications in plant breeding and resilient crop production.
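A minimal sketch of the sub-canopy splitting idea follows, assuming simple vertical slicing and basic per-slice descriptors; the pipeline's other components (noise removal, registration, plot extraction) and its actual feature set are omitted.

```python
import numpy as np

def canopy_fingerprint(points: np.ndarray, n_slices: int = 10) -> np.ndarray:
    """points: (N, 3) array of x, y, z coordinates for a single canopy."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    # Assign each point to a vertical slice (topmost point folds into
    # the last slice via the clip).
    idx = np.clip(np.digitize(z, edges) - 1, 0, n_slices - 1)
    features = []
    for i in range(n_slices):
        sl = points[idx == i]
        if len(sl) == 0:
            features += [0.0, 0.0, 0.0]
            continue
        features += [
            np.ptp(sl[:, 0]),       # x-extent (slice width)
            np.ptp(sl[:, 1]),       # y-extent (slice depth)
            len(sl) / len(points),  # fraction of canopy points in slice
        ]
    return np.asarray(features)

# Placeholder stand-in for a TLS scan of one hill plot.
cloud = np.random.default_rng(2).normal(size=(5000, 3))
print(canopy_fingerprint(cloud).shape)  # (30,) = 10 slices x 3 descriptors
```

Fixed-length vectors like this make canopies directly comparable, so a database of fingerprints can be queried for similar shapes with ordinary nearest-neighbor search.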
